

    Dublin City University at the TREC 2006 terabyte track

    For the 2006 Terabyte track in TREC, Dublin City University's participation was focused on the ad hoc search task. As in the previous two years [7, 4], our experiments on the Terabyte track have concentrated on the evaluation of a sorted inverted index, the aim of which is to sort the postings within each posting list in such a way that only a limited number of postings need be processed from each list, while at the same time minimising the loss of effectiveness in terms of query precision. This is done using the Físréal search system, developed at Dublin City University [4, 8].
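    A minimal sketch of the early-termination idea behind such a sorted inverted index, assuming a tiny invented index; this is not Físréal's implementation, and the terms, document ids, and scores are made up for illustration:

    ```python
    # Postings in each list are pre-sorted by a per-document score, so query
    # processing can stop after a fixed number of postings per list instead
    # of scanning every list to the end.

    from collections import defaultdict

    # Hypothetical sorted index: term -> [(doc_id, score), ...], best first.
    index = {
        "terabyte": [(3, 9.1), (7, 6.4), (12, 2.0), (44, 0.3)],
        "track":    [(7, 8.8), (3, 5.5), (90, 1.1), (12, 0.9)],
    }

    def top_subset_query(terms, postings_limit):
        """Score documents using only the first `postings_limit` postings of
        each query term's list; the tail of each list is never touched."""
        scores = defaultdict(float)
        for term in terms:
            for doc_id, score in index.get(term, [])[:postings_limit]:
                scores[doc_id] += score
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Processing only 2 postings per list still ranks docs 7 and 3 on top.
    print(top_subset_query(["terabyte", "track"], postings_limit=2))
    ```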

    Incidence of wrong-site surgery list errors for a 2-year period in a single national health service board

    Introduction: Wrong-site/side surgical "never events" continue to cause considerable harm to patients, healthcare professionals, and organizations within the United Kingdom. Incidence has remained static despite the mandatory introduction of surgical checklists. Operating theater list errors have been identified as a regular contributor to these never events. The aims of the study were to identify and to learn from the incidence of wrong-site/side list errors in a single National Health Service board. Methods: The study was conducted in a single National Health Service board serving a population of approximately 300,000. All theater teams systematically recorded errors identified at the morning theater brief or checklist pause as part of a board-wide quality improvement project. Data were reviewed for a 2-year period from May 2013 to April 2015, and all episodes of wrong-site/side list errors were identified for analysis. Results: No episodes of wrong-site/side surgery were recorded for the study period. A total of 86 wrong-site/side list errors were identified in 29,480 cases (0.29%). There was considerable variation in incidence between surgical specialties, with ophthalmology recording the largest proportion of errors per number of surgical cases performed (1 in 87 cases) and gynecology recording the smallest proportion (1 in 2671 cases). The commonest errors to occur were "wrong-side" list errors (62/86, 72.1%). Discussion: This is the first study to identify the incidence of wrong-site/side list errors in the United Kingdom. Reducing list errors should form part of a wider risk reduction strategy to reduce wrong-site/side never events. Human factors barrier management analysis may help identify the most effective checks and controls to reduce list error incidence, whereas resilience engineering approaches should help develop understanding of how best to capture and neutralize errors.

    'Me eatee him up': cannibal appetites in Cloud Atlas and Robinson Crusoe

    This article considers the anthropocentric construction of the human subject in Defoe's novel Robinson Crusoe, paying close attention to formal structure and the novel's thematic concern with, and confusion of, both eating animals and cannibalism. By connecting Crusoe's formal structure and thematic concerns with Mitchell's Cloud Atlas, which opens with Crusoe's central motif, I will go on to show that this novel attempts to imagine, and indeed formally enacts, a likely conclusion to the anthropocentric colonial expedition Robinson Crusoe is often said to represent. In this context cannibalism, as both a literal practice and a metaphor for consumer capitalism, is considered as part and parcel of a potentially catastrophic abstraction of the human from the ecosystem. As such, Cloud Atlas can be read as a minatory novel which attempts to work as a corrective to the consequences of a colonial adventure predicated not simply upon 'otherness' amongst humans but also between humans and the wider environment.

    Index ordering by query-independent measures

    There is an ever-increasing amount of data being produced from various data sources, and this data must be organised effectively if we hope to search through it. Traditional information retrieval approaches search through all available data in a particular collection in order to find the most suitable results; for particularly large collections this may be extremely time consuming. Our proposed solution to this problem is to search only a limited amount of the collection at query time, in order to speed up the retrieval process, while limiting the loss in retrieval efficacy (in terms of accuracy of results). We do this by first identifying the most "important" documents within the collection, and then sorting the documents in the collection in order of their "importance". In this way we can choose to limit the amount of information to search through by eliminating the documents of lesser importance, which should not only make the search more efficient, but should also limit any loss in retrieval accuracy. In this thesis we investigate various query-independent methods that may indicate the importance of a document in a collection. The more accurate the measure is at determining an important document, the more effectively we can eliminate documents from the retrieval process, improving the query throughput of the system while providing a high level of accuracy in the returned results. The effectiveness of these approaches is evaluated using the datasets provided by the Terabyte track at the Text REtrieval Conference (TREC).
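    A hedged illustration of the ordering step the thesis describes: postings are grouped per term and then sorted by a query-independent importance measure. The in-link counts below are an invented stand-in for whichever measure is actually evaluated:

    ```python
    # Illustrative sketch (not the thesis code) of index ordering by a
    # query-independent measure; "importance" can be any per-document score.

    def build_sorted_index(docs, importance):
        """docs: doc_id -> list of terms; importance: doc_id -> float.
        Returns term -> doc_ids sorted by descending document importance."""
        index = {}
        for doc_id, terms in docs.items():
            for term in set(terms):
                index.setdefault(term, []).append(doc_id)
        for postings in index.values():
            postings.sort(key=lambda d: importance[d], reverse=True)
        return index

    docs = {1: ["sorted", "index"], 2: ["sorted", "query"], 3: ["index"]}
    inlinks = {1: 12, 2: 3, 3: 40}  # hypothetical query-independent measure
    print(build_sorted_index(docs, inlinks))  # doc 3 precedes doc 1 for "index"
    ```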

    Top subset retrieval on large collections using sorted indices

    In this poster we describe alternative inverted index structures that reduce the time required to process queries, produce higher query throughput, and still return high-quality results to the end user. We present results based upon the TREC Terabyte dataset showing the improvements these indices yield in terms of both effectiveness and efficiency.
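    One purely illustrative reading of an "alternative index structure" is static pruning: once posting lists are sorted by importance, each list can be truncated at build time, shrinking the index itself. This sketch assumes the list format from the earlier example; the terms, ids, and scores are invented:

    ```python
    # Statically truncate each importance-sorted posting list at build time,
    # so the stored index is smaller and faster to scan at query time.

    def prune_index(sorted_index, keep):
        """Keep only the first `keep` postings of each pre-sorted list."""
        return {term: postings[:keep] for term, postings in sorted_index.items()}

    full = {"subset": [(3, 9.1), (7, 6.4), (12, 2.0)], "retrieval": [(7, 8.8)]}
    print(prune_index(full, keep=2))  # the tail beyond the top 2 postings is gone
    ```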

    Coping with noise in a real-world weblog crawler and retrieval system

    In this paper we examine the effects of noise when creating a real-world weblog corpus for information retrieval. We focus on the DiffPost (Lee et al. 2008) approach to noise removal from blog pages, examining the difficulties encountered when crawling the blogosphere during the creation of a real-world corpus of blog pages. We introduce and evaluate a number of enhancements to the original DiffPost approach in order to increase the robustness of the algorithm. We then extend DiffPost by looking at the anchor-text to text ratio, and discover that the time interval between crawls is more important to the successful application of noise-removal algorithms within the blog context than any additional improvements to the removal algorithm itself.
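    The following sketch illustrates the two ideas named in the abstract without claiming to reproduce the paper's algorithm: the DiffPost intuition that lines shared verbatim across two crawls of the same page are template noise, and an anchor-text-to-text ratio check, where the 0.7 threshold is an assumption for illustration:

    ```python
    # Sketch of the DiffPost intuition (Lee et al. 2008), not the paper's code:
    # lines identical across two crawls of the same blog page are likely
    # template noise; lines that differ are likely post content.

    import re

    def diffpost(lines_crawl_a, lines_crawl_b):
        """Keep lines of crawl A that do not also occur verbatim in crawl B."""
        common = set(lines_crawl_a) & set(lines_crawl_b)
        return [line for line in lines_crawl_a if line not in common]

    def anchor_text_ratio(html_line):
        """Fraction of a line's visible text that sits inside <a> tags."""
        anchors = "".join(re.findall(r"<a\b[^>]*>(.*?)</a>", html_line, re.S))
        anchor_text = re.sub(r"<[^>]+>", "", anchors)
        all_text = re.sub(r"<[^>]+>", "", html_line)
        return len(anchor_text) / len(all_text) if all_text else 0.0

    def is_probably_noise(html_line, threshold=0.7):
        # Navigation and blogroll lines are dominated by link text.
        return anchor_text_ratio(html_line) > threshold

    print(is_probably_noise('<a href="/archive">Archive</a> <a href="/about">About</a>'))  # True
    ```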

    Dublin City University at the TREC 2005 terabyte track

    For the 2005 Terabyte track in TREC, Dublin City University participated in all three tasks: Ad Hoc, Efficiency, and Named Page Finding. Our runs for TREC in all tasks were primarily focused on the application of "Top Subset Retrieval" to the Terabyte track. This retrieval utilises different types of sorted inverted indices so that fewer documents are processed, in order to reduce query times, and does so in a way that minimises the loss of effectiveness in terms of query precision. We also compare a distributed version of our Físréal search system [1, 2] against the same system deployed on a single machine.
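    For the distributed comparison, a generic scatter-gather pattern is the usual shape of such a system; the sketch below is an assumption about that pattern, not Físréal's documented architecture. Each shard searches its own slice of the collection and returns a ranking sorted by score, and a broker merges them:

    ```python
    # Generic scatter-gather merge; shard contents and scores are invented.

    import heapq
    import itertools

    def merge_shard_results(per_shard_results, k):
        """Each shard result is a list of (doc_id, score) pairs already sorted
        by descending score; merge them into the global top-k."""
        merged = heapq.merge(*per_shard_results, key=lambda kv: kv[1], reverse=True)
        return list(itertools.islice(merged, k))

    shard_a = [(3, 9.1), (12, 2.0)]
    shard_b = [(7, 8.8), (44, 0.3)]
    print(merge_shard_results([shard_a, shard_b], k=3))  # docs 3, 7, 12
    ```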